
Improve Generator Logging #2964

Merged: theunrepentantgeek merged 29 commits into main from improve/generator-logging on May 25, 2023

Conversation

theunrepentantgeek
Member

What this PR does / why we need it:

Improves the logging output from our code generator so it's a little less intimidating for users.

The primary change is to switch to ZeroLog. Given this is a command-line tool, the built-in overhead of klog seems misplaced.
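For context, the zerolog setup a command-line tool typically needs is quite small - roughly the sketch below (illustrative only; the function name and settings aren't taken from this PR):

```go
package main

import (
	"os"
	"time"

	"github.com/rs/zerolog"
)

// newConsoleLogger builds a human-friendly console logger.
// This is an illustrative sketch, not the generator's actual configuration.
func newConsoleLogger() zerolog.Logger {
	writer := zerolog.ConsoleWriter{
		Out:        os.Stdout,
		TimeFormat: time.Kitchen, // short timestamps suit an interactive CLI
	}
	return zerolog.New(writer).With().Timestamp().Logger()
}

func main() {
	log := newConsoleLogger()
	log.Info().Str("stage", "loadTypes").Msg("starting pipeline stage")
}
```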

Closes #2853

We had a bunch of logging that I felt was no longer required - messages that were likely useful while features were being written and bedded in, but which are now unused and uninformative.

I've changed some rare warnings that seem to never happen into panic calls, so that we're aware of them when they do occur (obscure warnings are easy to lose in our current output).

If a panic happens during the execution of a pipeline stage, that's now caught and gracefully handled.
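The recovery pattern boils down to something like this sketch (hypothetical names; not the generator's actual pipeline code):

```go
package main

import "fmt"

// runStage is a hypothetical helper showing the general pattern: a panic
// inside a pipeline stage is recovered and surfaced as an ordinary error
// instead of crashing the whole run.
func runStage(name string, stage func() error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("stage %s panicked: %v", name, r)
		}
	}()
	return stage()
}

func main() {
	err := runStage("example", func() error {
		panic("something unexpected")
	})
	fmt.Println(err) // stage example panicked: something unexpected
}
```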

Much of the long period at the start (while loading OpenAPI documents) now has periodic logging so the user knows something is happening, instead of the app sitting there silently for 30-60s.

[screenshot]
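The heartbeat logging amounts to something like the following sketch (the interval, message, and function name are placeholders rather than the exact values used in the PR):

```go
package main

import (
	"os"
	"time"

	"github.com/rs/zerolog"
)

// logProgress emits a periodic message until done is closed, so a
// long-running load doesn't look like a hang. Illustrative sketch only.
func logProgress(log zerolog.Logger, done <-chan struct{}) {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			log.Info().Msg("still loading OpenAPI documents...")
		}
	}
}

func main() {
	log := zerolog.New(zerolog.ConsoleWriter{Out: os.Stdout})
	done := make(chan struct{})
	go logProgress(log, done)
	time.Sleep(25 * time.Second) // stand-in for the real loading work
	close(done)
}
```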

Also addresses some method deprecations and updates some dependencies.

Special notes for your reviewer:

We've got some technical debt lurking under the hood - places where things can go wrong and we currently panic, but where those issues are expected and we really should just be returning an error. (And in this PR I've made some of that debt a little worse.) I tried to address it, but it got out of hand, largely due to the viral nature of error returns in Go. I have a plan for breaking that work down into smaller pieces and will create issues to track them as we clean things up.

How does this PR make you feel:
[gif]

@codecov-commenter

codecov-commenter commented May 12, 2023

Codecov Report

Merging #2964 (33681c0) into main (f890e1f) will decrease coverage by 0.02%.
The diff coverage is 42.14%.

```diff
@@            Coverage Diff             @@
##             main    #2964      +/-   ##
==========================================
- Coverage   54.20%   54.19%   -0.02%
==========================================
  Files        1374     1374
  Lines      580648   580781     +133
==========================================
+ Hits       314753   314765      +12
- Misses     214109   214244     +135
+ Partials    51786    51772      -14
```
| Impacted Files | Coverage Δ |
| --- | --- |
| v2/tools/generator/internal/astmodel/allof_type.go | 67.08% <0.00%> (ø) |
| v2/tools/generator/internal/astmodel/enum_type.go | 82.14% <0.00%> (+0.72%) ⬆️ |
| .../tools/generator/internal/astmodel/errored_type.go | 38.31% <0.00%> (-0.37%) ⬇️ |
| ...rator/internal/astmodel/kubebuilder_validations.go | 72.00% <0.00%> (ø) |
| .../generator/internal/astmodel/package_definition.go | 9.34% <ø> (+0.17%) ⬆️ |
| ...tools/generator/internal/astmodel/resource_type.go | 81.84% <ø> (+0.21%) ⬆️ |
| ...generator/internal/codegen/pipeline/add_secrets.go | 51.64% <ø> (+0.56%) ⬆️ |
| ...erator/internal/codegen/pipeline/assemble_oneof.go | 68.25% <ø> (+0.88%) ⬆️ |
| ...tor/internal/codegen/pipeline/check_for_anytype.go | 98.36% <ø> (+6.05%) ⬆️ |
| ...n/pipeline/inject_property_assignment_functions.go | 83.72% <ø> (-0.37%) ⬇️ |
| ... and 32 more | |

... and 14 files with indirect coverage changes

@matthchr
Member

We've got some technical debt lurking under the hood - places where things can go wrong and we currently panic, but where those issues are expected, and we really should just be returning an error.

Not entirely sure I'm parsing this correctly. Are you saying we have places where we panic, but it's expected that we panic (i.e. the error is an expected error which we then handle)? If so, agreed - that's awkward and those would be good to fix.

OTOH, I'm not all that concerned about places where we panic but actually want that error to be fatal. I mean, we could change those instances, sure - but if it's difficult/painful to do so, I'm not sure what value we get, since at the end of the day the error is going to result in process termination anyway.

@theunrepentantgeek
Member Author

theunrepentantgeek commented May 14, 2023

Are you saying we have places where we panic but it's expected that we panic (i.e. the error is an expected error which we then handle?)

We have places where we panic because we're encountering a situation that our code doesn't handle (due to the way Swagger has been used) rather than because our program has done something unexpected internally.

If someone is trying to configure a new resource, it's possible for our generator to simply blow up - and the user would have to try and troubleshoot the problem based purely on the stack trace.

When this happens to us, we run the generator under a debugger and can easily work out the problem. We have that option because we're intimately familiar with the code.

I don't expect that contributors would necessarily take the time to try and troubleshoot the generator, nor do I think that we should be expecting them to do so.

What I want to do is to address these panics by turning them into errors that provide reasonable information and context - then when something goes wrong, users have the information they need to try and resolve the problem, or at least they have good information to put into an issue so we can sort it out.
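To illustrate the intent (a hypothetical example, not code from this PR): instead of a bare panic, the failing path would return an error that names the resource and suggests where to look:

```go
package main

import "fmt"

// requireNameProperty is a hypothetical check showing the direction described
// above: return a contextual error rather than panicking on unexpected input.
func requireNameProperty(typeName string, hasName bool) error {
	if !hasName {
		return fmt.Errorf(
			"resource %s has no 'Name' property; check the Swagger spec or the resource configuration",
			typeName)
	}
	return nil
}

func main() {
	if err := requireNameProperty("Widget", false); err != nil {
		fmt.Println(err)
	}
}
```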

Plus, a program that blows up (for any reason) is likely to be perceived as low quality. One that fails with a detailed error message is better.

```go
@@ -111,8 +110,6 @@ func NewAzureResourceType(specType Type, statusType Type, typeName TypeName, sco
	}

	if nameProperty == nil {
		klog.V(1).Infof("resource %s is missing field 'Name', fabricating one...", typeName)
```
Collaborator


Can we still keep this (and a few others) with a higher level of logging?

Member Author

@theunrepentantgeek May 24, 2023


Not sure what you mean. "Higher" as in more important (lower numbers) or "Higher" as in larger number?

ZeroLogr essentially only uses 0 (info), 1 (debug) and 2 (trace).
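For anyone following along, a rough sketch of how those levels surface through the zerologr adapter (assuming github.com/go-logr/zerologr; not the exact wiring in this repo):

```go
package main

import (
	"os"

	"github.com/go-logr/zerologr"
	"github.com/rs/zerolog"
)

func main() {
	// Sketch of the mapping described above: V(0) maps to info,
	// V(1) to debug, and V(2) to trace.
	zl := zerolog.New(zerolog.ConsoleWriter{Out: os.Stdout}).Level(zerolog.TraceLevel)
	log := zerologr.New(&zl)

	log.V(0).Info("info-level message")
	log.V(1).Info("debug-level message")
	log.V(2).Info("trace-level message")
}
```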

Collaborator


Yeah, my bad - I meant we can retain some of these logs as lower-level logging.

theunrepentantgeek merged commit 53e2ef2 into main on May 25, 2023
theunrepentantgeek deleted the improve/generator-logging branch on May 25, 2023 20:01
Development

Successfully merging this pull request may close these issues.

Improve logging output of aso-gen
4 participants